
    Smart fusion of mobile laser scanner data with large scale topographic maps


    Automatic modelling of 3D trees using aerial LIDAR point cloud data and deep learning

    3D tree objects can be used in various applications, such as estimation of the physiological equivalent temperature (PET). In this project, a method is designed to extract 3D tree objects from a country-wide point cloud. To be applicable at that scale, the algorithm needs to be efficient. Tree extraction is done in two steps: point-wise classification using the PointNet deep learning network, followed by watershed segmentation to split the points into individual trees. After that, 3D tree models are generated. The method is evaluated on three areas in the city of Deventer, the Netherlands: a park, a city center and a housing block. This resulted in an average accuracy of 92% and an F1-score of 0.96.
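    The watershed step described above can be illustrated with a short, hedged sketch: after the point-wise classification has isolated tree points, the points are rasterised into a canopy height model, local maxima are taken as candidate tree tops, and a marker-based watershed grows one segment per top. The function names, parameter values and the SciPy/scikit-image building blocks below are illustrative assumptions, not the thesis' implementation.

```python
# Minimal sketch: individual-tree segmentation of tree-classified points via a
# canopy height model (CHM) watershed. `tree_points` is an (N, 3) array of
# x, y, z coordinates assumed to come from the preceding classification step.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def segment_trees(tree_points, cell_size=0.5, min_tree_distance=2.0):
    """Assign a tree ID to every tree-classified point via a CHM watershed."""
    xy = tree_points[:, :2]
    z = tree_points[:, 2]

    # Rasterise the points into a canopy height model (max z per cell).
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / cell_size).astype(int) + 1
    chm = np.zeros((rows, cols))
    r = ((xy[:, 1] - origin[1]) / cell_size).astype(int)
    c = ((xy[:, 0] - origin[0]) / cell_size).astype(int)
    np.maximum.at(chm, (r, c), z)

    # Smooth the CHM and detect local maxima as candidate tree tops.
    chm_smooth = ndimage.gaussian_filter(chm, sigma=1.0)
    tops = peak_local_max(chm_smooth,
                          min_distance=int(min_tree_distance / cell_size),
                          exclude_border=False)
    markers = np.zeros_like(chm, dtype=int)
    markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

    # Watershed on the inverted CHM grows one region per detected tree top.
    labels = watershed(-chm_smooth, markers, mask=chm > 0)

    # Transfer the per-cell tree ID back to the individual points.
    return labels[r, c]
```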

    An Automatic Procedure For Mobile Laser Scanning Platform 6DOF Trajectory Adjustment

    In this paper, a method is presented to improve the MLS platform’s trajectory in GNSS-denied areas. The method comprises two major steps. The first step is based on a 2D image registration technique described in our previous publication. Internally, this registration technique first performs aerial-to-aerial image matching, which yields correspondences that allow 3D tie points to be computed by multi-view triangulation. Similarly, it registers rasterized Mobile Laser Scanning Point Cloud (MLSPC) patches with the related aerial image patches. The latter registration provides the correspondence between the aerial-to-aerial tie points and the MLSPC’s 3D points. In the second step, which is described in this paper, a procedure utilizes three kinds of observations to improve the MLS platform’s trajectory. The first type of observation is the set of 3D tie points computed automatically in the previous step, the second is based on IMU readings, and the third is a soft constraint over related pose parameters. Here the 3D tie points are considered accurate and precise observations, since they provide both locally and globally strict constraints, whereas the IMU observations and soft constraints only provide locally precise constraints. For the 6DOF trajectory representation, the pose [R, t] parameters are first converted to six B-spline functions over time. For the trajectory adjustment, the coefficients of these B-splines are then updated from the established observations. We tested our method on an MLS data set acquired at a test area in Rotterdam and verified the trajectory improvement by evaluation against independently and manually measured GCPs. After the adjustment, the trajectory achieves an accuracy of RMSE X = 9 cm, Y = 14 cm and Z = 14 cm. Analysing the error in the updated trajectory suggests that our procedure is effective at adjusting the 6DOF trajectory and at regenerating a reliable MLSPC product.
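    The adjustment described above can be pictured as a non-linear least-squares problem over the B-spline coefficients. The sketch below assumes an Euler-angle pose parameterisation, a single scalar weight for the soft constraint, and generic SciPy building blocks; the paper's exact observation weighting and IMU residuals are not reproduced.

```python
# Hedged sketch: six cubic B-splines (three rotation angles, three
# translations) over time, with coefficients refined against tie-point
# observations plus a soft constraint towards the initial trajectory.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

DEGREE = 3  # cubic splines; len(knots) must equal n_coeffs + DEGREE + 1


def make_splines(knots, coeffs):
    """One B-spline per pose parameter; coeffs has shape (6, n_coeffs)."""
    return [BSpline(knots, c, DEGREE) for c in coeffs]


def residuals(flat_coeffs, knots, coeffs0, t_obs, p_sensor, p_world, w_soft):
    coeffs = flat_coeffs.reshape(coeffs0.shape)
    splines = make_splines(knots, coeffs)
    pose = np.stack([s(t_obs) for s in splines], axis=1)  # (n_obs, 6)

    # Tie-point residuals: transform sensor-frame tie points into the world
    # frame with the interpolated pose and compare with the reference
    # coordinates obtained from aerial multi-view triangulation.
    R = Rotation.from_euler('xyz', pose[:, :3]).as_matrix()  # (n_obs, 3, 3)
    p_pred = np.einsum('nij,nj->ni', R, p_sensor) + pose[:, 3:]
    tie_res = (p_pred - p_world).ravel()

    # Soft constraint: keep the coefficients close to the initial trajectory.
    soft_res = w_soft * (coeffs - coeffs0).ravel()
    return np.concatenate([tie_res, soft_res])


def adjust_trajectory(knots, coeffs0, t_obs, p_sensor, p_world, w_soft=0.1):
    result = least_squares(
        residuals, coeffs0.ravel(),
        args=(knots, coeffs0, t_obs, p_sensor, p_world, w_soft))
    return result.x.reshape(coeffs0.shape)
```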

    Automatic extraction of vertical walls from mobile and airborne laser scanning data


    Automatic 3D building model generation using deep learning methods based on CityJSON and 2D floor plans

    In the past decade, a lot of effort has been put into applying digital innovations to building life cycles. 3D models have proven to be efficient for decision making, scenario simulation and 3D data analysis during this life cycle. Creating such a digital representation of a building can be a labour-intensive task, depending on the desired scale and level of detail (LOD). This research aims at creating a new, automatic, deep-learning-based method for building model reconstruction. It combines exterior and interior data sources: 1) 3D BAG, 2) archived floor plan images. To reconstruct 3D building models from the two data sources, an innovative combination of methods is proposed. To obtain the information needed from the floor plan images (walls, openings and labels), deep learning techniques are used. In addition, post-processing techniques are introduced to transform the data into the required format. To fuse the extracted 2D data and the 3D exterior, a data fusion process is introduced. In the literature review, no prior research on automatic integration of CityGML/CityJSON and floor plan images was found; this method is therefore a first approach to this data integration.
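    One way to picture the data-fusion step is sketched below: wall polygons extracted from a floor plan (in pixel coordinates) are mapped onto the exterior footprint with a simple bounding-box alignment and extruded to a storey height. The function names, the alignment strategy and the dummy coordinates are assumptions for illustration only, not the pipeline used in the thesis.

```python
# Hedged sketch of fusing 2D floor-plan geometry with an exterior footprint.
import numpy as np


def align_plan_to_footprint(plan_xy, footprint_xy):
    """Scale/translate floor-plan coordinates onto the footprint bounding box."""
    p_min, p_max = plan_xy.min(axis=0), plan_xy.max(axis=0)
    f_min, f_max = footprint_xy.min(axis=0), footprint_xy.max(axis=0)
    scale = (f_max - f_min) / (p_max - p_min)  # per-axis scale factor
    return (plan_xy - p_min) * scale + f_min


def extrude_walls(wall_polygons_xy, base_z, storey_height):
    """Turn each aligned 2D wall polygon into a simple 3D solid (vertex lists)."""
    solids = []
    for poly in wall_polygons_xy:
        bottom = np.column_stack([poly, np.full(len(poly), base_z)])
        top = np.column_stack([poly, np.full(len(poly), base_z + storey_height)])
        solids.append({"bottom": bottom, "top": top})
    return solids


# Example with made-up data: a square floor plan in pixels and a 10 m x 8 m
# footprint (coordinate values are placeholders, not real 3D BAG data).
plan = np.array([[0, 0], [500, 0], [500, 400], [0, 400]], dtype=float)
footprint = np.array([[85000.0, 446000.0], [85010.0, 446000.0],
                      [85010.0, 446008.0], [85000.0, 446008.0]])
aligned = align_plan_to_footprint(plan, footprint)
walls = extrude_walls([aligned], base_z=0.0, storey_height=3.0)
```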

    Vision-based indoor localization via a visual SLAM approach

    With the increasing interest in indoor location-based services, vision-based indoor localization techniques have attracted much attention from both academia and industry. Inspired by the development of simultaneous localization and mapping (SLAM) techniques, we present a visual SLAM-based approach to obtain a 6 degrees-of-freedom (DoF) pose in indoor environments. First, the indoor scene is explored by a keyframe-based global mapping technique, which generates a database from a sequence of images covering the entire scene. After the exploration, a feature vocabulary tree is trained to accelerate feature matching in the image retrieval phase, and the spatial structures obtained from the keyframes are stored. Instead of querying with a single image, a short sequence of images at the query site is used to extract both features and their relative poses, which is a local visual SLAM procedure. The relative poses of the query images provide a pose graph-based geometric constraint that is used to assess the validity of the image retrieval results. The final positioning result is obtained by selecting the pose of the first correct corresponding image. © Authors 2019
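    The retrieval part of such a pipeline can be approximated with generic OpenCV components, as in the hedged sketch below: SIFT features from the keyframes train a small visual vocabulary, every keyframe is turned into a bag-of-words histogram, and a query image is matched by nearest histogram. The vocabulary size, the file names and the flat k-means vocabulary (standing in for the vocabulary tree mentioned in the abstract) are illustrative assumptions.

```python
# Hedged sketch: bag-of-words keyframe retrieval with OpenCV building blocks.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.FlannBasedMatcher()

# 1. Train a small visual vocabulary from the keyframe descriptors.
trainer = cv2.BOWKMeansTrainer(200)
keyframes = [cv2.imread(f, cv2.IMREAD_GRAYSCALE)
             for f in ["keyframe_000.png", "keyframe_001.png"]]  # placeholder paths
for img in keyframes:
    _, desc = sift.detectAndCompute(img, None)
    trainer.add(desc)
vocabulary = trainer.cluster()

# 2. Build a bag-of-words histogram for every keyframe in the database.
bow = cv2.BOWImgDescriptorExtractor(sift, matcher)
bow.setVocabulary(vocabulary)
db = []
for img in keyframes:
    kp = sift.detect(img, None)
    db.append(bow.compute(img, kp).ravel())
db = np.array(db)

# 3. Retrieve the most similar keyframe for a query image; in the paper the
#    relative poses of a short query sequence are additionally used to reject
#    geometrically inconsistent retrievals.
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
kp = sift.detect(query, None)
hist = bow.compute(query, kp).ravel()
best = int(np.argmin(np.linalg.norm(db - hist, axis=1)))
print("best matching keyframe index:", best)
```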